Rapid production of large-area patterns with nanometre resolution is crucial for the established semiconductor industry and for enabling industrial-scale production of next-generation quantum devices. Metastable atom lithography with binary holography masks has been proposed as a higher-resolution, lower-cost alternative to the current state of the art: extreme ultraviolet (EUV) lithography. However, it was recently shown that the interaction of the metastable atoms with the mask material (SiN) leads to a strong perturbation of the wavefront that is not described by classical scalar-wave theory. This means that the inverse problem (creating a mask based on a desired pattern) cannot be solved analytically, even in 1D. Here we present a machine-learning approach to mask generation targeted at metastable atoms. Our algorithm combines genetic optimisation and deep learning to obtain the mask. A novel deep neural architecture is trained to produce an initial approximation of the mask. This approximation is then used to generate the initial population of a genetic optimisation algorithm that can converge to arbitrary precision. We demonstrate the generation of arbitrary 1D patterns for system dimensions within the Fraunhofer approximation limit.
translated by Google Translate
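The pipeline described above (a network proposing an initial mask, then a genetic algorithm refining it) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the far-field model is a plain scalar Fraunhofer approximation (squared magnitude of the aperture's DFT), the "neural network" initial guess is simulated by flipping a few bits of a known solution, and the mask length and GA hyper-parameters are arbitrary assumptions.

```python
import cmath
import random

random.seed(0)
N = 16  # toy 1D mask length (assumed)

def far_field(mask):
    # Fraunhofer approximation: far-field intensity is the squared
    # magnitude of the discrete Fourier transform of the aperture.
    out = []
    for k in range(N):
        s = sum(m * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, m in enumerate(mask))
        out.append(abs(s) ** 2)
    return out

def fitness(mask, target):
    # Negative squared error between produced and desired intensity pattern.
    return -sum((a - b) ** 2 for a, b in zip(far_field(mask), target))

# Target pattern: far field of a known "true" mask, so a perfect solution exists.
true_mask = [random.randint(0, 1) for _ in range(N)]
target = far_field(true_mask)

def seed_individual():
    # Stand-in for the neural network's initial approximation: the true
    # mask with a few bits flipped.  Seeding the population this way is
    # what lets the GA start close to the optimum.
    ind = true_mask[:]
    for i in random.sample(range(N), 3):
        ind[i] ^= 1
    return ind

pop = [seed_individual() for _ in range(30)]
for gen in range(200):
    pop.sort(key=lambda m: fitness(m, target), reverse=True)
    survivors = pop[:10]
    children = []
    for _ in range(20):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]          # one-point crossover
        if random.random() < 0.3:          # bit-flip mutation
            child[random.randrange(N)] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=lambda m: fitness(m, target))
print(fitness(best, target))
```

The key structural point survives the simplifications: the GA population is seeded near the optimum instead of at random, which is what makes refinement to arbitrary precision tractable.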
In this paper we introduce the Foodi-ML dataset. This dataset contains over 1.5M unique images and over 9.5M store names, product names, descriptions, and collection sections gathered from the Glovo application. The data provided corresponds to food, drink, and grocery products from 37 countries in Europe, the Middle East, Africa, and Latin America. The dataset comprises 33 languages, including 870K samples in the languages of countries from Eastern Europe and Western Asia, such as Ukrainian and Kazakh, which have so far been underrepresented in publicly available visio-linguistic datasets. The dataset also includes widely spoken languages such as Spanish and English. To assist further research, we include benchmarks for two tasks: text-image retrieval and conditional image generation.
Estimating the pose of an object from a monocular image is an inverse problem fundamental in computer vision. The ill-posed nature of this problem requires incorporating deformation priors to solve it. In practice, many materials do not perceptibly shrink or extend when manipulated, constituting a powerful and well-known prior. Mathematically, this translates to the preservation of the Riemannian metric. Neural networks offer the perfect playground to solve the surface reconstruction problem as they can approximate surfaces with arbitrary precision and allow the computation of differential geometry quantities. This paper presents an approach to inferring continuous deformable surfaces from a sequence of images, which is benchmarked against several techniques and obtains state-of-the-art performance without the need for offline training.
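The inextensibility prior in the abstract above (materials that neither shrink nor stretch preserve the Riemannian metric) can be made concrete with a small numerical check: compute the first fundamental form of a parametric surface and measure its deviation from the flat template metric. The surface maps, step size, and residual form below are illustrative assumptions, not the paper's network or loss.

```python
import math

def metric(f, u, v, h=1e-5):
    # First fundamental form (E, F, G) of surface f(u, v),
    # with partial derivatives taken by central differences.
    fu = [(a - b) / (2 * h) for a, b in zip(f(u + h, v), f(u - h, v))]
    fv = [(a - b) / (2 * h) for a, b in zip(f(u, v + h), f(u, v - h))]
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return dot(fu, fu), dot(fu, fv), dot(fv, fv)

def isometry_residual(f, u, v):
    # Deviation from the flat template metric (E, F, G) = (1, 0, 1).
    E, F, G = metric(f, u, v)
    return (E - 1) ** 2 + 2 * F ** 2 + (G - 1) ** 2

flat = lambda u, v: [u, v, 0.0]
bent = lambda u, v: [math.sin(u), v, 1 - math.cos(u)]   # pure bending: isometric
stretched = lambda u, v: [1.5 * u, v, 0.0]              # stretching: violates prior

print(isometry_residual(flat, 0.3, 0.4))
print(isometry_residual(bent, 0.3, 0.4))
print(isometry_residual(stretched, 0.3, 0.4))
```

The bent cylinder has the same metric as the flat sheet (residual near zero), while the stretched sheet does not; a network approximating the surface can penalise exactly this residual, which is why differentiable surface representations fit the problem so naturally.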
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
A generalized understanding of protein dynamics is an unsolved scientific problem, the solution of which is critical to the interpretation of the structure-function relationships that govern essential biological processes. Here, we approach this problem by constructing coarse-grained molecular potentials based on artificial neural networks and grounded in statistical mechanics. For training, we build a unique dataset of unbiased all-atom molecular dynamics simulations of approximately 9 ms for twelve different proteins with multiple secondary structure arrangements. The coarse-grained models are capable of accelerating the dynamics by more than three orders of magnitude while preserving the thermodynamics of the systems. Coarse-grained simulations identify relevant structural states in the ensemble with comparable energetics to the all-atom systems. Furthermore, we show that a single coarse-grained potential can integrate all twelve proteins and can capture experimental structural features of mutated proteins. These results indicate that machine learning coarse-grained potentials could provide a feasible approach to simulate and understand protein dynamics.
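One standard way to ground a coarse-grained potential in statistical mechanics, as the abstract above describes, is force matching: fit the coarse-grained model so that its forces reproduce reference all-atom forces in a least-squares sense. A deliberately minimal 1D sketch follows; the harmonic functional form and synthetic reference data are toy stand-ins for the neural-network potentials and MD trajectories in the paper.

```python
# Force matching in one dimension: fit the stiffness k of a coarse-grained
# harmonic potential U(x) = 0.5 * k * x**2 so that its force -k*x matches
# reference "all-atom" forces sampled on a grid of configurations.
xs = [i / 10 for i in range(-10, 11)]       # sampled configurations (toy)
k_true = 3.0                                # hidden reference stiffness (toy)
ref_forces = [-k_true * x for x in xs]      # reference forces, -dU/dx

# Closed-form least squares for the single parameter k:
# minimise sum_x ( -k*x - f_ref(x) )**2 over k.
num = sum(-x * f for x, f in zip(xs, ref_forces))
den = sum(x * x for x in xs)
k_fit = num / den
print(k_fit)
```

In the paper the quadratic form is replaced by a neural network and the reference forces come from ~9 ms of unbiased all-atom simulation, but the variational objective has this same shape.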
Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
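The contrastive alignment idea behind the SGC loss above can be sketched with a standard InfoNCE-style objective: each agent representation should score highest against the graph encoding of its own environment relative to the other encodings in the batch. The batch construction, dimensionality, and temperature below are illustrative assumptions, not the paper's exact loss.

```python
import math
import random

random.seed(0)

def info_nce(agent_reprs, graph_reprs, temperature=0.1):
    # Cross-entropy over cosine-similarity logits, where the positive for
    # agent i is graph encoding i and all other encodings are negatives.
    def cos(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(a * a for a in y))
        return dot / (nx * ny)
    loss = 0.0
    for i, a in enumerate(agent_reprs):
        logits = [cos(a, g) / temperature for g in graph_reprs]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += log_denom - logits[i]       # -log softmax of the positive
    return loss / len(agent_reprs)

# Toy batch: perfectly aligned pairs versus shuffled (misaligned) pairs.
reprs = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
aligned = info_nce(reprs, reprs)
shuffled = info_nce(reprs, reprs[1:] + reprs[:1])
print(aligned, shuffled)
```

Because the graph encoding is only used inside this loss, it is a training-only signal: no graph decoder is needed at inference time, which is the point the abstract makes about doing away with explicit graph decoding.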
Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move. This information is useful to make inferences about 3D shape, physical properties and object interactions. While the problem of tracking arbitrary physical points on surfaces over longer video clips has received some attention, no dataset or benchmark for evaluation existed, until now. In this paper, we first formalize the problem, naming it tracking any point (TAP). We introduce a companion benchmark, TAP-Vid, which is composed of both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. Central to the construction of our benchmark is a novel semi-automatic crowdsourced pipeline which uses optical flow estimates to compensate for easier, short-term motion like camera shake, allowing annotators to focus on harder sections of video. We validate our pipeline on synthetic data and propose a simple end-to-end point tracking model TAP-Net, showing that it outperforms all prior methods on our benchmark when trained on synthetic data.
Building general agents that perform well over a wide range of tasks has been an important goal of reinforcement learning since its inception. The problem has been the subject of a large body of work, with performance frequently measured by observing scores over the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies to achieve a 200-fold reduction in the experience needed to outperform the human baseline. We investigate a range of instabilities and bottlenecks encountered in this reduced-data regime and propose effective solutions to build a more robust and efficient agent. We also demonstrate performance competitive with high-performing methods such as Muesli and MuZero. The four key components of our approach are (1) an approximate trust-region method, which enables stable bootstrapping from the online network, (2) a normalisation scheme for the loss and priorities, which improves robustness when learning a set of value functions with a wide range of scales, (3) an improved architecture, employing techniques from NFNets to leverage deeper networks without the need for normalisation layers, and (4) a policy distillation method, which smooths out the instantaneous greedy policy over time.
Robotic cloth manipulation is a relevant and challenging problem for autonomous robotic systems. Highly deformable objects such as textiles can adopt many configurations and shapes during manipulation. Hence, a robot should not only understand the current cloth configuration but also be able to predict the future behaviour of the cloth. This paper addresses the problem of indirectly controlling the configuration of certain points of a textile object by applying actions on other parts of it, using a Model Predictive Control (MPC) strategy that also allows the behaviour of the indirectly controlled points to be constrained. The designed controller finds the optimal control signals to attain the desired future target configuration. The scenario explored in this paper considers tracking a reference trajectory with the lower corners of a square piece of cloth by grasping its upper corners. To this end, we propose and validate a linear cloth model that allows the MPC-related optimisation problem to be solved in real time. Reinforcement Learning (RL) techniques are used to learn the optimal parameters of the proposed cloth model and to tune the resulting MPC. After obtaining accurate tracking results in simulation, the full control scheme was implemented and executed on a real robot, achieving accurate tracking even under adverse conditions. Although the total observed error reaches the 5 cm mark for a 30x30 cm cloth, our analysis shows that the MPC contributes less than 30% of that value.
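The control loop described above (a linear model inside a receding-horizon optimiser) can be sketched on a scalar stand-in: at every step, optimise the control sequence over the horizon against the model's rollout cost, apply only the first control, and re-plan. The model coefficients, horizon, weights, and finite-difference optimiser below are illustrative assumptions; the paper's linear cloth model is multivariate and its parameters are learned with RL.

```python
# Minimal receding-horizon (MPC) sketch on a scalar linear model
# x_{t+1} = A*x_t + B*u_t, tracking a constant reference.
A, B = 0.9, 0.5       # toy model coefficients (stand-in for the learned model)
H = 5                 # prediction horizon (assumed)
LAM = 0.01            # control-effort weight (assumed)

def rollout_cost(x0, u_seq, ref):
    # Predicted tracking error plus control effort over the horizon.
    x, cost = x0, 0.0
    for u in u_seq:
        x = A * x + B * u
        cost += (x - ref) ** 2 + LAM * u ** 2
    return cost

def plan(x0, ref, iters=200, lr=0.1, eps=1e-6):
    # Optimise the control sequence by coordinate-wise gradient descent,
    # with gradients from finite differences on the rollout cost.
    u = [0.0] * H
    for _ in range(iters):
        for i in range(H):
            base = rollout_cost(x0, u, ref)
            u[i] += eps
            grad = (rollout_cost(x0, u, ref) - base) / eps
            u[i] -= eps + lr * grad     # undo probe, take descent step
    return u

x, ref = 0.0, 1.0
for t in range(20):        # receding horizon: apply first control, re-plan
    u = plan(x, ref)
    x = A * x + B * u[0]
print(x)
```

A real-time implementation would exploit the linearity of the model to solve each planning step as a quadratic program rather than by iterative descent, which is precisely why the paper's linear cloth model matters.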
Capsule networks (CapsNets) aim to parse images into a hierarchy of components consisting of objects, parts, and their relations. Despite their potential, they are computationally expensive, a major drawback that limits the efficient use of these networks on more complex datasets. Current CapsNet models only compare their performance with capsule baselines and underperform deep CNN-based models on complex tasks. This paper proposes an efficient way of learning capsules that detect atomic parts of an input image through a group of sub-capsules, onto which the input vector is projected. Subsequently, we present the Wasserstein Embedding Module, which first measures the dissimilarity between the input and the components modelled by the sub-capsules, and then finds their degree of alignment based on the learned optimal transport. This strategy leverages a new insight: agreement between the input and the sub-capsules is defined by the similarity between their respective component distributions. Our proposed model (i) is lightweight, allowing capsules to be applied to more complex vision tasks, and (ii) performs better than or on par with CNN-based models on these challenging tasks. Our experimental results show that Wasserstein Embedding Capsules (WECapsules) are more robust to affine transformations, scale effectively to larger datasets, and outperform CNN and CapsNet models on several vision tasks.
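The optimal-transport alignment at the heart of the Wasserstein Embedding Module can be illustrated with a minimal entropically regularised Sinkhorn iteration between two small component histograms. The histograms, ground-cost matrix, and regularisation strength below are toy values, not the learned capsule quantities.

```python
import math

def sinkhorn(a, b, cost, reg=0.1, iters=200):
    # Entropic optimal transport between histograms a and b:
    # alternately rescale the Gibbs kernel K = exp(-cost/reg) so the
    # transport plan matches both marginals, then report the plan's cost.
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0] * len(a)
    v = [1.0] * len(b)
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(len(b)))
             for i in range(len(a))]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(len(a)))
             for j in range(len(b))]
    P = [[u[i] * K[i][j] * v[j] for j in range(len(b))] for i in range(len(a))]
    return sum(P[i][j] * cost[i][j]
               for i in range(len(a)) for j in range(len(b)))

a = [0.8, 0.2]                      # input's component distribution (toy)
b = [0.2, 0.8]                      # a mismatched sub-capsule distribution
cost = [[0.0, 1.0], [1.0, 0.0]]     # matched components are cheap to align
same = sinkhorn(a, a, cost)         # well-aligned: near-zero transport cost
diff = sinkhorn(a, b, cost)         # misaligned: mass must cross components
print(same, diff)
```

A low transport cost signals that the input's component distribution matches a sub-capsule's, which is exactly the agreement score the module turns into a routing signal.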